Low-light raw video denoising with realistic motion
Fig. 1. Representative examples of low/normal-light images in the LLRVD dataset.
With the rapid advancement of smartphone cameras and the growing demand for night-scene video, low-light videography has become critically important. In low-light conditions, however, noise is almost inevitable due to the low photon count and significantly degrades video quality. Various hardware solutions have been attempted to collect more photons, such as larger apertures, flash illumination, or long-exposure capture. These remedies remain limited by camera size, particularly in smartphones, and are ineffective in dynamic scenes.
In response to these challenges, we will use the low-light raw video enhancement dataset proposed by Prof. Fu's team in [1]. This dataset targets low-light raw video denoising under realistic motion: it captures complex, real-world motion rather than relying on artificial or static scenes, and it provides high-quality noisy-to-clean frame pairs for training and evaluating advanced denoising algorithms. It thereby addresses the pressing need for data that reflect actual low-light environments with realistic movement.
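As an illustration only, the sketch below shows one way such noisy/clean raw pairs could be wrapped for training a denoiser. The directory layout ("noisy/", "clean/"), .npy storage, and black/white levels are assumptions for this example, not the released format of the dataset in [1].

```python
# Minimal sketch of pairing noisy and clean raw frames for supervised training.
# File layout and raw calibration values are hypothetical.
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset


class RawPairDataset(Dataset):
    """Yields (noisy, clean) raw frame tensors normalized to [0, 1]."""

    def __init__(self, root: str, black_level: float = 512.0, white_level: float = 16383.0):
        # Assumed layout: <root>/noisy/*.npy and <root>/clean/*.npy with matching order.
        self.noisy_paths = sorted(glob.glob(os.path.join(root, "noisy", "*.npy")))
        self.clean_paths = sorted(glob.glob(os.path.join(root, "clean", "*.npy")))
        self.black_level = black_level
        self.white_level = white_level

    def _normalize(self, raw: np.ndarray) -> torch.Tensor:
        # Map raw sensor values to [0, 1] using the (assumed) black/white levels.
        raw = (raw.astype(np.float32) - self.black_level) / (self.white_level - self.black_level)
        return torch.from_numpy(np.clip(raw, 0.0, 1.0)).unsqueeze(0)  # add channel dim

    def __len__(self) -> int:
        return len(self.noisy_paths)

    def __getitem__(self, idx: int):
        noisy = self._normalize(np.load(self.noisy_paths[idx]))
        clean = self._normalize(np.load(self.clean_paths[idx]))
        return noisy, clean
```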
We will host the competition on an open-source online platform such as CodaLab. All submissions will be evaluated automatically by a scoring script running on the server, and we will manually double-check the results of top-ranked methods before releasing the final test-set ranking.
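For concreteness, a minimal sketch of such a server-side scoring script is given below, assuming submissions and ground truth are stored as per-frame 16-bit PNGs with matching filenames and scored by mean PSNR. The directory layout and the choice of metric are illustrative assumptions, not the actual competition bundle or protocol.

```python
# Hypothetical scoring script: mean PSNR over all frames in the test set.
import glob
import os

import cv2
import numpy as np


def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 65535.0) -> float:
    """Peak signal-to-noise ratio between a predicted and a reference frame."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)


def score_submission(pred_dir: str, gt_dir: str) -> float:
    """Average PSNR over all frames shared by the submission and the ground truth."""
    scores = []
    for gt_path in sorted(glob.glob(os.path.join(gt_dir, "*.png"))):
        name = os.path.basename(gt_path)
        gt = cv2.imread(gt_path, cv2.IMREAD_UNCHANGED)
        pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_UNCHANGED)
        scores.append(psnr(pred, gt))
    return float(np.mean(scores))


if __name__ == "__main__":
    # Paths are placeholders for the unpacked submission and reference bundles.
    print(f"mean PSNR: {score_submission('submission/frames', 'ground_truth/frames'):.2f} dB")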